
    Sensor fusion of motion-based sign language interpretation with deep learning

    Sign language allows hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition using computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera's viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires a team of experts and high-cost hardware; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which fuses data from six inertial measurement units (IMUs). The IMUs are attached to the fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by a camera's field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that helps hearing-impaired people communicate with others and improves their quality of life.
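The sensor-fusion front end described above can be sketched as follows. This is a minimal illustration only: the window length, channel count, and the early-fusion (concatenation) scheme are assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch: windows from six IMUs (five fingertips plus the back
# of the hand) are stacked into one feature matrix for a downstream
# deep-learning classifier. Window length and channel count are assumed.
N_IMUS = 6      # five fingertips + back of the hand
WINDOW = 50     # samples per gesture window (assumed)
CHANNELS = 6    # 3-axis accelerometer + 3-axis gyroscope

def fuse_imu_window(imu_windows):
    """Early fusion: (N_IMUS, WINDOW, CHANNELS) -> (WINDOW, N_IMUS*CHANNELS)."""
    x = np.asarray(imu_windows)
    assert x.shape == (N_IMUS, WINDOW, CHANNELS)
    return x.transpose(1, 0, 2).reshape(WINDOW, N_IMUS * CHANNELS)

rng = np.random.default_rng(0)
features = fuse_imu_window(rng.normal(size=(N_IMUS, WINDOW, CHANNELS)))
print(features.shape)  # (50, 36)
```

Each fused window could then be fed to a recurrent or convolutional classifier; concatenating all sensors per time step is one common fusion choice, not necessarily the paper's.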

    Methods to detect and reduce driver stress: a review

    Automobiles are the most common mode of transportation in urban areas. An alert mind is a prerequisite for driving to avoid tragic accidents; driver stress can lead to faulty decision-making and cause severe injuries. Therefore, numerous techniques and systems have been proposed and implemented to subdue negative emotions and improve the driving experience. Studies show that conditions such as the road, the state of the vehicle, the weather, the driver's personality, and the presence of passengers can affect driver stress. All of these factors significantly influence a driver's attention. This paper presents a detailed review of techniques proposed for reducing and recovering from driving stress. These technologies can be divided into three categories: notification alerts, driver assistance systems, and environmental soothing. Notification alert systems enhance the driving experience by strengthening the driver's awareness of his or her physiological condition, thereby helping to avoid accidents. Driver assistance systems guide the driver through difficult driving circumstances. Environmental soothing techniques help relieve driver stress caused by changes in the environment. Furthermore, driving maneuvers, driver stress detection, and the factors contributing to driver stress are discussed and reviewed to facilitate a better understanding of the topic.

    Machine-Learning-Enabled Virtual Screening for Inhibitors of Lysine-Specific Histone Demethylase 1

    A machine learning approach has been applied to virtual screening for lysine-specific demethylase 1 (LSD1) inhibitors. LSD1 is an important anti-cancer target. Machine learning models to predict activity were constructed using Morgan molecular fingerprints. The dataset, consisting of 931 molecules with LSD1 inhibition activity, was obtained from the ChEMBL database. An evaluation of several candidate algorithms on this dataset revealed that the support vector regressor gave the best model, with a coefficient of determination (R²) of 0.703. Virtual screening with this model identified five predicted potent inhibitors from a ZINC database subset of more than 300,000 molecules. The screening recovered a known inhibitor, RN1, as well as four compounds whose activity against LSD1 had not previously been suggested. Thus, a machine-learning-enabled virtual screen for LSD1 inhibitors was performed using only the structural information of the molecules.
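The fingerprint-regression-then-rank pipeline above can be sketched with scikit-learn. Everything here is a stand-in: random bit vectors replace real Morgan fingerprints (which would come from RDKit), the activities are synthetic rather than ChEMBL pIC50 values, and the linear kernel and dataset sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-ins for Morgan fingerprints and activity labels;
# the paper used 931 ChEMBL molecules and 2048-bit fingerprints.
rng = np.random.default_rng(42)
n_mols, n_bits = 300, 128
X = rng.integers(0, 2, size=(n_mols, n_bits)).astype(float)
w = rng.normal(size=n_bits)
y = 0.1 * X @ w + rng.normal(scale=0.1, size=n_mols)   # synthetic "activity"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVR(kernel="linear", C=10.0).fit(X_tr, y_tr)   # kernel choice assumed
r2 = r2_score(y_te, model.predict(X_te))

# Virtual screening step: rank unseen candidate fingerprints by
# predicted activity and keep the top five.
candidates = rng.integers(0, 2, size=(1000, n_bits)).astype(float)
top5 = np.argsort(model.predict(candidates))[-5:]
```

In a real screen, the candidate matrix would be fingerprints computed from the ZINC library, and the top-ranked molecules would go on to experimental validation.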

    A Smartphone-Based Driver Safety Monitoring System Using Data Fusion

    This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was implemented as an application for an Android-based smartphone, so measuring safety-related data requires no extra expenditure or equipment, while the system provides high resolution and flexibility. The safety monitoring process fuses attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, which serve as input variables to an inference analysis framework. A fuzzy Bayesian framework is designed to indicate the driver's capability level and is updated continuously in real time. The sensory data are transmitted to the smartphone via Bluetooth. A fake-incoming-call warning service alerts the driver if his or her safety level appears compromised. Realistic testing of the system demonstrates the practical benefit of fusing multiple features to provide more authentic and effective driver safety monitoring.
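The multi-sensor fusion step can be illustrated with a toy Bayesian update over discrete driver states. The paper's framework additionally incorporates fuzzy membership functions, which are omitted here; the states, sensor set, and likelihood tables below are all invented for illustration.

```python
import numpy as np

# Toy Bayesian fusion: combine independent per-sensor likelihoods with a
# prior over discrete driver states. All numbers are illustrative.
STATES = ("safe", "impaired")

def fuse(prior, likelihoods):
    """Posterior over STATES given independent per-sensor likelihoods."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

prior = [0.9, 0.1]
obs = [np.array([0.2, 0.8]),   # prolonged eye closure: unlikely if safe
       np.array([0.4, 0.6]),   # elevated heart-rate variability
       np.array([0.7, 0.3])]   # vehicle speed still normal
post = fuse(prior, obs)        # roughly [0.778, 0.222]
```

Even with a strongly "safe" prior, two concerning sensor readings shift a quarter of the probability mass toward "impaired"; a real-time system would repeat this update on every new sensor window.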

    Mobile Healthcare for Automatic Driving Sleep-Onset Detection Using Wavelet-Based EEG and Respiration Signals

    Drowsy driving is a major cause of traffic accidents worldwide and has drawn the attention of researchers in recent decades. This paper presents an application for in-vehicle, non-intrusive, mobile-device-based automatic detection of driver sleep onset in real time. The proposed application classifies the driver's mental fatigue condition by analyzing electroencephalogram (EEG) and respiration signals in the time and frequency domains. The concept relies heavily on mobile technology, particularly remote physiological monitoring over Bluetooth. Respiratory events are gathered, and eight-channel EEG readings are captured from the frontal, central, and parietal (Fpz-Cz, Pz-Oz) regions. The EEG is preprocessed with a Butterworth bandpass filter, and features are then extracted from the filtered signals using the wavelet packet transform (WPT), which decomposes the signals into four frequency bands: α, β, θ, and δ. A mutual information (MI) technique selects the most descriptive features for classification. Reducing the number of features improves the sleep-onset classification speed of the support vector machine (SVM) and yields a high sleep-onset recognition rate. Test results reveal that the combined use of EEG and respiration signals achieves 98.6% recognition accuracy. The proposed application also explores the possibility of processing long-term multi-channel signals.
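The MI-based feature selection step can be sketched as below. The plug-in binning estimator and the synthetic "band power" features are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

# Sketch of mutual-information feature ranking: an informative feature
# (e.g. a theta-band power that tracks sleep onset) should score higher
# than pure noise. Estimator and data are illustrative assumptions.
def mutual_info(x, y, bins=8):
    """Plug-in MI estimate (nats) between a continuous feature and binary labels."""
    xb = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    joint = np.zeros((bins, 2))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=500)                    # awake / sleep onset
theta_power = labels + rng.normal(scale=0.5, size=500)   # informative feature
noise = rng.normal(size=500)                             # irrelevant feature
scores = [mutual_info(f, labels) for f in (theta_power, noise)]
```

Keeping only the highest-MI features shrinks the SVM's input dimension, which is where the reported speed-up in classification comes from.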

    Smart Wearable Hand Device for Sign Language Interpretation System With Sensors Fusion


    American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach

    Sign language is designed to allow deaf and mute communities to convey messages and connect with society. Unfortunately, learning and practicing sign language is not common in society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for partial sign language recognition, whereas this study aimed for full American Sign Language (ASL) recognition, covering 26 letters and 10 digits. Most ASL letters are static (no movement), but certain letters are dynamic (they require specific movements). Thus, this study also extracted features from finger and hand motions to differentiate between static and dynamic gestures. The experimental results revealed that the recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. The recognition rates for the combination of 26 letters and 10 digits are slightly lower: approximately 72.79% for the SVM and 88.79% for the DNN. The sign language recognition system thus has great potential to reduce the gap between deaf communities and others, and the proposed prototype could also serve as an interpreter in everyday service settings, such as banks and post offices.
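One way to separate static from dynamic gestures, as the abstract describes, is a motion feature computed over the tracked trajectory. The path-length feature, the 10 mm threshold, and the toy trajectories below are hypothetical, not the paper's values.

```python
import numpy as np

# Hypothetical static/dynamic split: a gesture window counts as dynamic
# when the tracked fingertip path length exceeds a threshold.
def is_dynamic(positions, threshold=10.0):
    """positions: (T, 3) fingertip trajectory from the LMC, in millimetres."""
    path_length = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    return path_length > threshold

t = np.linspace(0.0, 1.0, 60)[:, None]
held_a = np.zeros((60, 3)) + [0.0, 150.0, 0.0]                   # static "A"
drawn_j = np.hstack([40 * t, 150 + 0 * t, 20 * np.sin(6 * t)])   # dynamic "J"
print(is_dynamic(held_a), is_dynamic(drawn_j))  # False True
```

Windows flagged as dynamic would then be routed to a classifier that also sees temporal features, while static windows can be recognized from a single hand pose.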

    Wearable Glove-Type Driver Stress Detection Using a Motion Sensor
